5 research outputs found

    Hardness vs. (Very Little) Structure in Cryptography: A Multi-Prover Interactive Proofs Perspective

    The hardness of highly-structured computational problems gives rise to a variety of public-key primitives. On one hand, the structure exhibited by such problems underlies the basic functionality of public-key primitives, but on the other hand it may endanger public-key cryptography in its entirety via potential algorithmic advances. This subtle interplay initiated a fundamental line of research on whether structure is inherently necessary for cryptography, starting with Rudich's early work (PhD Thesis '88) and recently leading to that of Bitansky, Degwekar and Vaikuntanathan (CRYPTO '17). Identifying the structure of computational problems with their corresponding complexity classes, Bitansky et al. proved that a variety of public-key primitives (e.g., public-key encryption, oblivious transfer and even functional encryption) cannot be used in a black-box manner to construct either any hard language that has $\mathsf{NP}$-verifiers both for the language itself and for its complement, or any hard language (and even promise problem) that has a statistical zero-knowledge proof system -- corresponding to hardness in the structured classes $\mathsf{NP} \cap \mathsf{coNP}$ or $\mathsf{SZK}$, respectively, from a black-box perspective. In this work we prove that the same variety of public-key primitives do not inherently require even very little structure in a black-box manner: We prove that they do not imply any hard language that has multi-prover interactive proof systems both for the language and for its complement -- corresponding to hardness in the class $\mathsf{MIP} \cap \mathsf{coMIP}$ from a black-box perspective. Conceptually, given that $\mathsf{MIP} = \mathsf{NEXP}$, our result rules out languages with very little structure. Additionally, we prove a similar result for collision-resistant hash functions, and more generally for any cryptographic primitive that exists relative to a random oracle. Already the cases of languages that have $\mathsf{IP}$ or $\mathsf{AM}$ proof systems both for the language itself and for its complement, which we rule out as immediate corollaries, lead to intriguing insights. For the case of $\mathsf{IP}$, where our result can be circumvented using non-black-box techniques, we reveal a gap between black-box and non-black-box techniques. For the case of $\mathsf{AM}$, where circumventing our result via non-black-box techniques would be a major development, we both strengthen and unify the proofs of Bitansky et al. for languages that have $\mathsf{NP}$-verifiers both for the language itself and for its complement and for languages that have a statistical zero-knowledge proof system.
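
    For context, the following standard containments (not stated in the abstract, but facts of complexity theory) show why the $\mathsf{IP}$ and $\mathsf{AM}$ cases follow as immediate corollaries of the $\mathsf{MIP} \cap \mathsf{coMIP}$ result:

    $$\mathsf{NP} \cap \mathsf{coNP} \;\subseteq\; \mathsf{AM} \cap \mathsf{coAM} \;\subseteq\; \mathsf{IP} \cap \mathsf{coIP} \;\subseteq\; \mathsf{MIP} \cap \mathsf{coMIP},$$

    where in addition $\mathsf{SZK} \subseteq \mathsf{AM} \cap \mathsf{coAM}$, $\mathsf{IP} = \mathsf{PSPACE}$, and $\mathsf{MIP} = \mathsf{NEXP}$. The inclusions in the chain are syntactic: an $\mathsf{AM}$ proof is a (public-coin) single-prover interactive proof, which in turn is a multi-prover interactive proof with one prover.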

    An Information-Theoretic Proof of the Streaming Switching Lemma for Symmetric Encryption

    Motivated by a fundamental paradigm in cryptography, we consider a recent variant of the classic problem of bounding the distinguishing advantage between a random function and a random permutation. Specifically, we consider the problem of deciding whether a sequence of $q$ values was sampled uniformly with or without replacement from $[N]$, where the decision is made by a streaming algorithm restricted to using at most $s$ bits of internal memory. In this work, the distinguishing advantage of such an algorithm is measured by the KL divergence between the distributions of its output as induced under the two cases. We show that for any $s = \Omega(\log N)$ the distinguishing advantage is upper bounded by $O(q \cdot s / N)$, and even by $O(q \cdot s / (N \log N))$ when $q \leq N^{1 - \epsilon}$ for any constant $\epsilon > 0$, where it is nearly tight with respect to the KL divergence.
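
    The sketch below is an illustrative toy (not from the paper): it plays the distinguishing game the abstract analyzes, using a memory-bounded streaming algorithm that remembers a small random subset of the values it has seen and flags any repeat. The abstract measures advantage by KL divergence; this toy only estimates the simpler gap in acceptance probability, and all function names and parameters here are assumptions for illustration.

```python
import random

def sample_stream(N, q, with_replacement):
    """Yield q values from [N], sampled with or without replacement."""
    if with_replacement:
        for _ in range(q):
            yield random.randrange(N)
    else:
        yield from random.sample(range(N), q)

def bounded_memory_distinguisher(stream, capacity):
    """Remember at most `capacity` previously seen values (reservoir-style).
    Output 1 ("with replacement") iff a repeat among the remembered values is seen."""
    memory = set()
    for i, x in enumerate(stream, start=1):
        if x in memory:
            return 1
        if len(memory) < capacity:
            memory.add(x)
        elif random.random() < capacity / i:  # keep a uniform random subset of the prefix
            memory.remove(random.choice(tuple(memory)))
            memory.add(x)
    return 0

def empirical_advantage(N, q, capacity, trials=1000):
    """Estimate |Pr[output 1 | with repl.] - Pr[output 1 | without repl.]|."""
    hits = {True: 0, False: 0}
    for with_repl in (True, False):
        for _ in range(trials):
            hits[with_repl] += bounded_memory_distinguisher(
                sample_stream(N, q, with_repl), capacity)
    return abs(hits[True] - hits[False]) / trials

if __name__ == "__main__":
    # With roughly s bits one can store about s / log2(N) values;
    # the measured advantage grows with both q and the memory capacity.
    print(empirical_advantage(N=10**4, q=500, capacity=32))
```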

    Tight Tradeoffs in Searchable Symmetric Encryption

    A searchable symmetric encryption (SSE) scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner. Recent experiments have indicated that the practical relevance of such schemes heavily relies on the tradeoff between their space overhead, locality (the number of non-contiguous memory locations that the server accesses with each query), and read efficiency (the ratio between the number of bits the server reads with each query and the actual size of the answer). These experiments motivated Cash and Tessaro (EUROCRYPT '14) and Asharov et al. (STOC '16) to construct SSE schemes offering various such tradeoffs, and to prove lower bounds for natural SSE frameworks. Unfortunately, the best-possible tradeoff has not been identified, and there are substantial gaps between the existing schemes and lower bounds, indicating that a better understanding of SSE is needed. We establish tight bounds on the tradeoff between the space overhead, locality and read efficiency of SSE schemes within two general frameworks that capture the memory access pattern underlying all existing schemes. First, we introduce the ``pad-and-split'' framework, refining that of Cash and Tessaro while still capturing the same existing schemes. Within our framework we significantly strengthen their lower bound, proving that any scheme with locality $L$ must use space $\Omega(N \log N / \log L)$ for databases of size $N$. This is a tight lower bound, matching the tradeoff provided by the scheme of Demertzis and Papamanthou (SIGMOD '17), which is captured by our pad-and-split framework. Then, within the ``statistical-independence'' framework of Asharov et al. we show that their lower bound is essentially tight: We construct a scheme whose tradeoff matches their lower bound within an additive $O(\log \log \log N)$ factor in its read efficiency, once again improving upon the existing schemes. Our scheme offers optimal space and locality, and nearly-optimal read efficiency that depends on the frequency of the queried keywords: For a keyword that is associated with $n = N^{1 - \epsilon(n)}$ document identifiers, the read efficiency is $\omega(1) \cdot \epsilon(n)^{-1} + O(\log \log \log N)$ when retrieving its identifiers (where the $\omega(1)$ term may be arbitrarily small, and $\omega(1) \cdot \epsilon(n)^{-1}$ is the lower bound proved by Asharov et al.). In particular, for any keyword that is associated with at most $N^{1 - 1/o(\log \log \log N)}$ document identifiers (i.e., for any keyword that is not exceptionally common), we provide read efficiency $O(\log \log \log N)$ when retrieving its identifiers.
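
    As a point of reference for the three metrics traded off above, the following toy (plaintext-only, not a secure scheme and not the paper's pad-and-split or statistical-independence frameworks) pads each keyword's identifier list to the next power of two and stores it contiguously, then reports the resulting space overhead, locality, and per-keyword read efficiency. All names and the dummy padding value are illustrative assumptions.

```python
from math import ceil, log2

def pad_to_power_of_two(identifiers):
    """Pad a keyword's identifier list with dummy entries up to the next power of two."""
    n = len(identifiers)
    padded_len = 1 if n <= 1 else 2 ** ceil(log2(n))
    return identifiers + ["DUMMY"] * (padded_len - n)

def metrics(database):
    """database: dict mapping keyword -> list of document identifiers.
    Returns (space overhead, locality, per-keyword read efficiency)."""
    N = sum(len(ids) for ids in database.values())
    padded = {w: pad_to_power_of_two(ids) for w, ids in database.items()}
    space_overhead = sum(len(p) for p in padded.values()) / N
    locality = 1  # each padded list is stored contiguously, so a query touches one region
    read_efficiency = {w: len(padded[w]) / len(database[w]) for w in database}
    return space_overhead, locality, read_efficiency

if __name__ == "__main__":
    db = {"alpha": list(range(5)), "beta": list(range(13)), "gamma": list(range(2))}
    print(metrics(db))  # overhead < 2, locality 1, read efficiency < 2 per keyword
```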

    Searchable Symmetric Encryption: Optimal Locality in Linear Space via Two-Dimensional Balanced Allocations

    Searchable symmetric encryption (SSE) enables a client to store a database on an untrusted server while supporting keyword search in a secure manner. Despite the rapidly increasing interest in SSE technology, experiments indicate that the performance of the known schemes scales badly to large databases. Somewhat surprisingly, this is not due to their usage of cryptographic tools, but rather due to their poor locality (where locality is defined as the number of non-contiguous memory locations the server accesses with each query). The only known schemes that do not suffer from poor locality suffer either from an impractical space overhead or from an impractical read efficiency (where read efficiency is defined as the ratio between the number of bits the server reads with each query and the actual size of the answer). We construct the first SSE schemes that simultaneously enjoy optimal locality, optimal space overhead, and nearly-optimal read efficiency. Specifically, for a database of size $N$, under the modest assumption that no keyword appears in more than $N^{1 - 1/\log\log N}$ documents, we construct a scheme with read efficiency $\tilde{O}(\log \log N)$. This essentially matches the lower bound of Cash and Tessaro (EUROCRYPT '14) showing that any SSE scheme must be sub-optimal in either its locality, its space overhead, or its read efficiency. In addition, even without making any assumptions on the structure of the database, we construct a scheme with read efficiency $\tilde{O}(\log N)$. Our schemes are obtained via a two-dimensional generalization of the classic balanced allocations (``balls and bins'') problem that we put forward. We construct nearly-optimal two-dimensional balanced allocation schemes, and then combine their algorithmic structure with subtle cryptographic techniques.
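
    For readers unfamiliar with the classic (one-dimensional) balanced-allocations process that the abstract generalizes, here is a minimal sketch of it: placing each ball in the lighter of two uniformly random bins keeps the maximum load near $\log \log n$, versus roughly $\log n / \log \log n$ for a single random choice. This is only the textbook process, not the paper's two-dimensional variant or its SSE schemes.

```python
import random
from collections import Counter

def one_choice_max_load(num_balls, num_bins):
    """Each ball goes to a single uniformly random bin (max load ~ log n / log log n)."""
    return max(Counter(random.randrange(num_bins) for _ in range(num_balls)).values())

def two_choice_max_load(num_balls, num_bins):
    """Each ball goes to the less loaded of two random bins (max load ~ log log n)."""
    loads = [0] * num_bins
    for _ in range(num_balls):
        i, j = random.randrange(num_bins), random.randrange(num_bins)
        loads[min(i, j, key=lambda b: loads[b])] += 1
    return max(loads)

if __name__ == "__main__":
    n = 10**5
    print("one choice: ", one_choice_max_load(n, n))
    print("two choices:", two_choice_max_load(n, n))
```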

    Can PPAD hardness be based on standard cryptographic assumptions?

    We consider the question of whether PPAD hardness can be based on standard cryptographic assumptions, such as the existence of one-way functions or public-key encryption. This question is particularly well-motivated in light of new devastating attacks on obfuscation candidates and their underlying building blocks, which are currently the only known source for PPAD hardness. Central in the study of obfuscation-based PPAD hardness is the sink-of-verifiable-line (SVL) problem, an intermediate step in constructing instances of the PPAD-complete problem source-or-sink. Within the framework of black-box reductions, we prove the following results: (i) Average-case PPAD hardness (and even SVL hardness) does not imply any form of cryptographic hardness (not even one-way functions). Moreover, even when assuming the existence of one-way functions, average-case PPAD hardness (and, again, even SVL hardness) does not imply any public-key primitive. Thus, strong cryptographic assumptions (such as obfuscation-related ones) are not essential for average-case PPAD hardness. (ii) Average-case SVL hardness cannot be based either on standard cryptographic assumptions or on average-case PPAD hardness. In particular, average-case SVL hardness is not essential for average-case PPAD hardness. (iii) Any attempt to base the average-case hardness of the PPAD-complete problem source-or-sink on standard cryptographic assumptions must result in instances with a nearly exponential number of solutions. This stands in striking contrast to the obfuscation-based approach, which results in instances having a unique solution. Taken together, our results imply that it may still be possible to base PPAD hardness on standard cryptographic assumptions, but any such black-box attempt must significantly deviate from the obfuscation-based approach: It cannot go through the SVL problem, and it must result in source-or-sink instances with a nearly exponential number of solutions.
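
    To make the SVL problem mentioned above concrete, the sketch below illustrates the usual syntax of an SVL instance (my paraphrase, not spelled out in the abstract): a successor function defining a line from a known source, a verifier accepting a pair $(x, i)$ iff $x$ is the $i$-th vertex on that line, and a target index whose vertex is the sought solution. The toy instance is deliberately easy and solvable by walking the line; the constants, indexing convention, and names are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SVLInstance:
    successor: Callable[[int], int]      # S: vertex -> next vertex on the line
    verify: Callable[[int, int], bool]   # V(x, i): is x the i-th vertex from the source?
    source: int                          # the known start of the line (index 0)
    target_index: int                    # T: a solution is the T-th vertex on the line

def toy_instance(n_bits=16, target_index=1000):
    """A trivially easy toy instance: the successor is a small LCG-style map."""
    mask = (1 << n_bits) - 1
    succ = lambda x: (x * 6364136223846793005 + 1442695040888963407) & mask
    def verify(x, i):
        v = 0                            # the source vertex
        for _ in range(i):               # a toy verifier may simply walk the line
            v = succ(v)
        return v == x
    return SVLInstance(succ, verify, source=0, target_index=target_index)

def solve_by_walking(inst):
    """The trivial solver: walk the line from the source for T steps."""
    v = inst.source
    for _ in range(inst.target_index):
        v = inst.successor(v)
    assert inst.verify(v, inst.target_index)
    return v

if __name__ == "__main__":
    print(solve_by_walking(toy_instance()))
```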